Search Results: "Axel Beckert"

21 March 2012

Steve Kemp: My code makes it into GNU Screen, and now you can use it. Possibly.

Via Axel Beckert I learned today that GNU Screen is 25 years old, and although development is slow it has not ceased. Back in 2008 I started to post about some annoyances with GNU Screen. At the time I posted a simple patch to implement the unbindall primitive. I posted some other patches and fixed a couple of bugs, but although there was some positive feedback initially, over time it ceased completely. Regrettably I didn't feel there was the need to maintain a fork properly, so I quietly sighed, cried, and ceased. In 2009 my code was moved upstream into the GNU Screen repository (+documentation update). We're now in 2012. It looks like there might be a stable release of GNU Screen in the near future, which makes my code live "for real", but in the meantime the recent snapshot upload to Debian Experimental makes it available to the brave. 2008 - 2012. Four years to make my change visible to end-users. If I didn't use screen every day, and still have my own local version, I'd have forgotten about that entirely. Still I guess this makes today a happy day! Wheee! ObQuote: "Thanks. For a while there I thought you were keeping it a secret." - Escape To Victory

Axel Beckert: aptitude-gtk will likely vanish

As Christian already wrote, there's an Aptitude revival ongoing. We already saw this young team releasing aptitude 0.6.5 about 6 weeks ago, more commits have been made, and now we're heading towards an 0.6.6 release quickly. But this revival mostly covers the well-known and loved curses interface (TUI) of aptitude and not the seldom installed GTK interface, which unfortunately never really took off: While aptitude itself (i.e. the curses and commandline interface) is installed on nearly 99% of all Debian installations which take part in Debian's Popularity Contest statistics, aptitude-gtk is only installed on 0.42% of all these installations. One reason is likely that aptitude-gtk still hasn't got all the neat features of the curses interface. And another reason is probably that it's still quite buggy. Since nobody from the current Aptitude Team has the experience, leisure or time to resurrect (or even complete) aptitude-gtk, the plan is to stop building aptitude-gtk from the aptitude source package soon, i.e. to remove it from Debian for now. Like the even less finished Qt interface of aptitude, its code will stay in the VCS, but will be unmaintained unless someone steps up to continue aptitude-gtk (or aptitude-qt, or both), maybe even as its own source package. So if you like aptitude-gtk so much that you're still using it and want to continue using it, please think about contributing by joining the Aptitude Team and getting aptitude's GUI interface(s) back in shape. Another option would be to find a mentor so that resurrecting (one of) aptitude's GUI interfaces could become (again) a potential project in Debian's participation in Google's Summer of Code. Please direct any questions about aptitude-gtk or aptitude-qt to the Aptitude Development Mailing List. Or even better, join the discussion in this thread.

20 March 2012

Axel Beckert: Happy Birthday GNU Screen!

According to this Usenet posting, GNU Screen turned 25 years old today. (Found via Fefe.) And no, it's not dead. On the contrary, the reaction on the mailing list to bug fixes with patches is usually impressively prompt. :-) I took this occasion to upload a current git snapshot of GNU Screen to Debian Experimental. Bug #644788 (screen 4.1.0 can't attach to a running or detached screen 4.0.3 session) is still an issue with that snapshot, but gladly upstream seems to be working on a solution for it. There's even talk about a 4.1.0 beta release soon, although that hasn't happened yet. Have fun!

19 March 2012

Jan Wagner: Chemnitzer Linuxtage 2012

As announced 3 weeks ago, the Debian project was present at the Chemnitzer Linuxtage. Several talks and workshops were held by people related to the Debian project. At the booth we had talks and discussions with exhibitors and visitors; unfortunately I didn't have much time to visit more than small parts of two lectures. Unfortunately (for the visitors) we didn't have any merchandising on board, although we received several requests. On Sunday Axel surprised us with some debian.ch merchandise left over from FOSDEM. At the booth we had a demo machine running Babelbox and xpenguins, which attracted visitors very well. Booth Babelbox We also received more than one "Just thank you" from satisfied users. :) Four different talks and one workshop were held by Debian people, but they were not specific to Debian. The workshop was about OpenStreetMap; the lectures were about commandline helpers, grep everything, quality analysis and team management in open source projects, and Conkeror and other keyboard-based web browsers. Many thanks to Jan Dittberner, Andreas Tille, Christian Hoffmann, Florian Baumann, Christoph Egger, Axel Beckert, Adam Schmalhofer, Markus Schnalke, Sebastian Harl and Patrick Matthäi for running the booth, answering a wide range of questions or just chatting with visitors. A special thank you to TMT GmbH & Co. KG for providing the complete equipment and sponsoring its transportation. At the end we have to send a big thank you to the organizing team of the Chemnitzer Linuxtage. It was fun and a pleasure to find new friends and meet old ones from the Free Software community. A small sidenote: was anybody aware that the openSUSE package search is using screenshots.debian.net?

10 January 2012

Axel Beckert: Illegal attempt to re-initialise SSL for server (theoretically shouldn't happen!)

After dist-upgrading my main Hetzner server from Lenny to Squeeze, Apache failed to come up, barfing the following error message into the alphabetically last defined and enabled virtual host's error log:
[error] Illegal attempt to re-initialise SSL for server (theoretically shouldn't happen!)
Well, this is not theory but the real world, and it did happen, and it took me a while to find out what was wrong with the configuration even though it worked with Lenny's Apache version. To avoid that others have to search as long as I had to, here's the solution: Look at all enabled sites, pick out those which have a VirtualHost on port 443 defined, and verify that all these VirtualHost containers have their own SSLEngine On statement. If at least one is missing, you'll run into the above-mentioned error message. And it won't necessarily show up in the error log of those VirtualHosts which are missing the statement, but only in the last VirtualHost (or the last VirtualHost on port 443). To find the relevant site files, I used the following one-liner:
grep -lE 'VirtualHost.*443' sites-enabled/*[^~] | \
  xargs grep -ci "SSLEngine On" | \
  grep :0
Should work for all sites which have just one VirtualHost on port 443 defined per file. I suspect that the rise of SNI made Apache's SSL implementation more picky with regards to VirtualHosts. Oh, and kudos to this comment on an article on Debian-Administration.org because it finally pointed me in the right direction. :-)
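For reference, here is a minimal sketch of what every SSL VirtualHost should contain; the host name and certificate paths below are made-up examples, not taken from the original post:
<VirtualHost *:443>
    ServerName www.example.org
    # This directive must be present in *every* VirtualHost on port 443,
    # otherwise Apache fails with "Illegal attempt to re-initialise SSL".
    SSLEngine On
    SSLCertificateFile    /etc/ssl/certs/www.example.org.pem
    SSLCertificateKeyFile /etc/ssl/private/www.example.org.key
    DocumentRoot /var/www/www.example.org
</VirtualHost>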

5 December 2011

Axel Beckert: automounter vs procmail

At work we use .procmailrc files generated by CGIpaf to let non-technical users create forwards, out-of-office mails, etc. and any combination thereof. This also has the advantage that we can filter out double bounces and spam (which also prevents us from being listed in spammer blacklists). Unfortunately autofs (seemingly regardless of whether autofs4 or autofs5 is used) appears to be unreliable if there are bursts of mount or umount requests, resulting either in "File or directory not found" error messages while trying to access the home directory of a user, or "Directory not empty" error messages if the automounter tries to remove the mount point after unmounting. In the latter case a not-mounted directory owned by root is left over. In the end both cases lead to procmail behaving as if that user does not have a .procmailrc, which looks like sporadically lost mail to those who forward all mails. (The mails then can be found in the local default INBOX for that user.) Additionally there are similar issues when the NFS servers are not available. The most effective countermeasure we found so far was adding tests to the global /etc/procmailrc to check if the user's home directory exists and belongs to the correct user:
# -----------------
# Global procmailrc
# -----------------
# For debugging, turn off if everything works well
VERBOSE=1
LOGFILE=/var/log/procmail.log
# This only works with Bourne shells, $SHELL defaults to the user's
# login shell. And by experience dash seems not to work, so we use bash.
OLDSHELL=$SHELL
SHELL=/bin/bash
# temporary failure (see EX_TEMPFAIL in /usr/include/sysexits.h) if
# $LOGNAME is not set for some reason. (Just to be sure our paths
# later on are not senseless.)
:0
* ? test -z "$LOGNAME"
 
    LOG="Expected variable LOGNAME not set. "
    EXITCODE=75
    :0
    /dev/null
 
# temporary failure (see EX_TEMPFAIL in /usr/include/sysexits.h) if
# $HOME is not readable. ~$LOGNAME does not seem to work, so this uses
# a hard wired /home/.
:0
* ? test ! -r /home/$LOGNAME
{
    LOG="Home of user $LOGNAME not readable: /home/$LOGNAME "
    EXITCODE=75
    :0
    /dev/null
}
# temporary failure (see EX_TEMPFAIL in /usr/include/sysexits.h) if
# $HOME has wrong owner. ~$LOGNAME does not seem to work, so this uses
# a hard wired /home/.
:0
* ? test ! -O /home/$LOGNAME
{
    LOG="Home of user $LOGNAME has wrong owner: /home/$LOGNAME "
    EXITCODE=75
    :0
    /dev/null
}
[...]
If you want to store a copy of these mails for debugging purposes on every delivery attempt, replace /dev/null with some Maildir or mbox only accessible to root. One small but important part was to explicitly declare bash as the shell for executing the tests, otherwise mails for users with tcsh or zsh as login shell filled up the mail queue and never got delivered (as long as the SHELL variable never gets fixed). Only drawback so far: This leads to more lagging e-mail during e-mail bursts also for those users who have no .procmailrc, because procmail can't check whether there really is no .procmailrc. Extensive procmail documentation can be found online at the Procmail Documentation Project as well as in the man pages procmail(1), procmailrc(5) and procmailex(5).
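If you go that route, the last recipe above could then look like this; just a sketch, the Maildir path is made up and the trailing slash is what makes procmail deliver in Maildir format:
:0
* ? test ! -O /home/$LOGNAME
{
    LOG="Home of user $LOGNAME has wrong owner: /home/$LOGNAME "
    EXITCODE=75
    :0
    # root-only Maildir for debugging instead of discarding the mail
    /root/procmail-debug/
}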

13 November 2011

Axel Beckert: grep everything

During OpenRheinRuhr I noticed that a friend of mine didn't know about zgrep and friends. So I told him what other grep variations I know, and he told me about some grep variations I didn't know about. So here's our collection of grep wrappers, derivatives and variations. First I'll list programs which search for text in different file formats:
grep through what                                        | Fixed Strings | Wildcards / Basic RegExps | Extended RegExps   | Debian package
uncompressed text files                                  | fgrep         | grep                      | egrep              | grep
gzip-compressed text files                               | zfgrep        | zgrep                     | zegrep             | zutils, gzip
bzip2-compressed text files                              | bzfgrep       | bzgrep                    | bzegrep            | bzip2
xz-compressed text files                                 | xzfgrep       | xzgrep                    | xzegrep            | xz-utils
uncompressed text files in installed Debian packages     | dfgrep        | dgrep                     | degrep             | debian-goodies
gzip-compressed text files in installed Debian packages  | -             | dzgrep                    | -                  | debian-goodies
PDF documents                                            | -             | -                         | pdfgrep            | pdfgrep
POD texts                                                | podgrep       | -                         | -                  | pmtools
E-Mail folder (mbox, MH, Maildir)                        | -             | mboxgrep -G               | mboxgrep -E        | mboxgrep
Patches                                                  | -             | grepdiff                  | grepdiff -E        | patchutils
Process list                                             | -             | -                         | pgrep              | procps
Gnumeric spreadsheets                                    | ssgrep -F     | ssgrep                    | ?                  | gnumeric
Files in ZIP archives                                    | -             | -                         | zipgrep            | unzip
ID3 tags in MP3s                                         | -             | -                         | taggrepper         | taggrepper
Network packets                                          | -             | -                         | ngrep              | ngrep
Tar archives                                             | -             | -                         | targrep / ptargrep | perl (Experimental only for now)
And then there are also greps for special patterns on more or less normal files:
grep for what                              | uncompressed files                     | compressed files | Debian package
PCRE (Perl Compatible Regular Expression)  | pcregrep (see also the grep -P option) | zpcregrep        | pcregrep
IP Address in a given CIDR range           | grepcidr                               | -                | grepcidr
XPath expression                           | xml_grep                               | -                | xml-twig-tools
One question is though still unanswered for us: Is there some kind of meta-grep which chooses the right grep from above per file by looking at the MIME type of the file in question, similar to xdg-open? (A rough idea of how that could work is sketched below.) Other tools which have grep in their name, but are too special to properly fit into the above lists: [...] Includes contributions by Frank Hofmann and Faidon Liambotis.
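Until such a tool shows up, one could approximate it with a small wrapper; a minimal sketch, where the script name, the handled MIME types and the option handling are all made up for illustration:
#!/bin/sh
# metagrep: pick a grep variant per file based on its MIME type.
# Usage: metagrep PATTERN FILE...
pattern="$1"; shift
for f in "$@"; do
    case "$(file --brief --mime-type "$f")" in
        application/gzip|application/x-gzip) zgrep   "$pattern" "$f" ;;
        application/x-bzip2)                 bzgrep  "$pattern" "$f" ;;
        application/x-xz)                    xzgrep  "$pattern" "$f" ;;
        application/pdf)                     pdfgrep "$pattern" "$f" ;;
        *)                                   grep    "$pattern" "$f" ;;
    esac
done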

27 October 2011

Axel Beckert: Conkeror usable on Ubuntu again despite XULRunner removal

Because of the very annoying new Mozilla release politics (which look like a pissing contest with the similarly annoying Google Chrome/Chromium release schedule), Ubuntu kicked out Mozilla XULRunner with its recent release of 11.10 Oneiric. And with XULRunner, Ubuntu also kicked out Conkeror and all other XULRunner reverse dependencies, too. Meh. Sparked by this thread on the Conkeror mailing list, I extended the Debian package's /usr/bin/conkeror wrapper script so that it also looks for firefox in the search path if no xulrunner* is found, and added an alternative dependency on firefox versions greater or equal to 3.5, too. From now on, if the wrapper script finds no xulrunner but firefox in the search path, it calls firefox -app instead of xulrunner-$VERSION to start Conkeror. With the exception of the about: page showing the orange-blue Firefox logo and claiming that this is "Firefox $CONKEROR_VERSION", it works as expected on my Toshiba AC100 netbook running the armel port of Ubuntu 11.10. From version 1.0~~pre+git1110272207-~nightly1 on, the Conkeror Nightly Built Debian Packages will be installable on Ubuntu 11.10 Oneiric again without the need to install or keep the XULRunner version from Ubuntu 11.04 Natty. For those who don't want to use the nightly builds, I created a (currently still empty) specific PPA for Conkeror where I'll probably upload all the conkeror packages I upload to Debian Unstable.
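The fallback logic boils down to something like the following simplified sketch; this is not the actual wrapper script, and the application.ini path and the xulrunner version handling are simplified assumptions:
#!/bin/sh
# Prefer xulrunner, fall back to "firefox -app" if only firefox is installed.
CONKEROR_APP=/usr/share/conkeror/application.ini
if XULRUNNER=$(command -v xulrunner 2>/dev/null); then
    exec "$XULRUNNER" "$CONKEROR_APP" "$@"
elif command -v firefox >/dev/null 2>&1; then
    exec firefox -app "$CONKEROR_APP" "$@"
else
    echo "Neither xulrunner nor firefox found in \$PATH" >&2
    exit 1
fi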

9 October 2011

Axel Beckert: Git Snapshot of GNU Screen in Debian Experimental

I just uploaded a snapshot of GNU Screen to Debian Experimental. The package (4.1.0~20110819git450e8f3-1) is based on upstream's HEAD whose most recent commit currently dates to the 19th of August 2011. While the upload fixes tons of bugs which accumulated over the past two years in Debian's, Ubuntu's and upstream's bug trackers, I don't yet regard it as suitable for the next stable release (and hence for Debian Unstable) since there's one not so nice issue about it. Nevertheless it fixes a lot of open issues (of which the oldest is a wishlist bug report dating back to 1998 :-) and I didn't want to withhold it from the rest of the Debian community, so I uploaded it to Debian Experimental.

Issues closed in Debian Experimental
Issues which will be closed in Ubuntu
Please test the version from Experimental

If you are affected by one of the issues mentioned above, please try the version from Debian Experimental and check if they're resolved for you, too. Thanks to all who contributed! A lot of the fixes have been made or applied upstream by Sadrul Habib Chowdhury, who also industriously tagged Debian bug reports as fixed-upstream. Thanks! Thanks also to Brian P Kroth who gave the initial spark to this upload by packaging Fedora 15's git snapshot for Debian and filing a bug, although the upload is based on the current HEAD version of GNU Screen as this fixes some more important issues than the snapshot Fedora 15 includes. That way also two patches from Fedora/RedHat's screen package are included in this upload. (Co-)Maintainer wanted! Oh, and if you care about the state of GNU Screen in Debian, I'd really appreciate if you'd join in and contribute to our collab-maint git repository: there are still a lot of unresolved issues and I know that I won't be able to fix all of them myself. And since Hessophanes unfortunately currently does not have enough time for the package, we definitely need more people maintaining this package. P.S. Yes, I know about tmux and tried to get some of my setups working with it, too. But I still prefer screen over tmux. :-)

30 September 2011

Axel Beckert: Fun facts from the UDD

After spotting an upload of mira, who in turn spotted an upload of abe (the package, not an upload by me aka abe@d.o), mira (mirabilos aka tg@d.o) noticed that there are Debian packages which have the same name as some Debian Developers have as login name. Of course I noticed a long time ago that there is a Debian package with my login name "abe". Another well-known Debian login and former package name is amaya. But since someone else came up with that thought, too, it was time to find the definite answer to the question which DD login names also exist as Debian package names. My first try was based on the list of trusted GnuPG keys:
$ apt-cache policy $(gpg --keyring /etc/apt/trusted.gpg --list-keys 2>/dev/null | \
                     grep @debian.org | \
                     awk -F'[<@]' '{print $2}' | \
                     sort -u) 2>/dev/null | \
                   egrep -o '^[^ :]*'
alex
tor
ed
bam
ng
But this was not satisfying, as my own name didn't show up and gpg also threw quite a lot of block reading errors (which is also the reason for redirecting STDERR). mira then had the idea of using the Ultimate Debian Database to answer this question more properly:
udd=> SELECT login, name FROM carnivore_login, carnivore_names
      WHERE carnivore_login.id=carnivore_names.id AND login IN
      (SELECT package AS login FROM packages, active_dds
       WHERE packages.package=active_dds.login UNION
       SELECT source AS name FROM sources, active_dds
       WHERE sources.source=active_dds.login)
      ORDER BY login;
 login                   name
-------+---------------------------------------
 abe     Axel Beckert
 alex    Alexander List
 alex    Alexander M. List  4402020774 9332554
 and     Andrea Veri
 ash     Albert Huang
 bam     Brian May
 ed      Ed Boraas
 ed      Ed G. Boraas [RSA Compatibility Key]
 ed      Ed G. Boraas [RSA]
 eric    Eric Dorland
 gq      Alexander GQ Gerasiov
 iml     Ian Maclaine-cross
 lunar   Jérémy Bobbio
 mako    Benjamin Hill
 mako    Benjamin Mako Hill
 mbr     Markus Braun
 mlt     Marcela Tiznado
 nas     Neil A. Schemenauer
 nas     Neil Schemenauer
 opal    Ola Lundkvist
 opal    Ola Lundqvist
 paco    Francisco Moya
 paul    Paul Slootman
 pino    Pino Toscano
 pyro    Brian Nelson
 stone   Fredrik Steen
(26 rows)
Interestingly tor (Tor Slettnes) is missing in this list, so it's not complete either. At least I'm quite sure that nobody maintains a package with his own login name as package name. :-) We also have no packages ending in "-guest", so there's no chance that a package name matches an Alioth guest account either.

22 September 2011

Axel Beckert: Emacs Macros: Repeat on Steroids

vi users have their . (dot) redo command for repeating the last command. The article Repeating Commands in Emacs in Mickey Petersen's blog Mastering Emacs explained Emacs' equivalent for that, namely the command repeat, by default bound to C-x z. I seldom use it though, as I mostly have to repeat a chain of commands. What I use are so-called keyboard macros. For example, for the CVE-2011-3192 vulnerability in Apache I added a line like Include /etc/apache2/sites-common/CVE-2011-3192.conf to all VirtualHosts. So I started Emacs with all the relevant files: grep CVE-2011-3192 -l /etc/apache2/sites-available/*[^~] | xargs emacs & To remove those Include lines again, M-x flush-lines is probably the easiest way in Emacs. So for every file I had to call flush-lines with always the same parameter, save the buffer and then close the file, or in Emacsish: kill the buffer. So while working on the first file I recorded my doing as a keyboard macro:
C-x (
Start recording
M-x flush-lines<Enter>CVE-2011-3192<Enter>
flush all lines which contain the string CVE-2011-3192
C-x C-s
save the current buffer
C-x C-k<Enter>
kill the current buffer, i.e. close the file
C-x )
Stop recording
Then I just had to call the saved macro with C-x e. It flushed all lines, saved the changes and switched to the next remaining file by closing the current file, all with three key-strokes. And to make it even easier, from the second occasion on I only had to press e to call the macro directly again. So I just pressed e a bunch of times and had all files edited. (In this case I used git diff afterwards to check that I didn't wreck anything by half-automating my editing. :-) Of course there are other ways to do this, too, e.g. using sed or so, but I still think it's a neat example for showing the power of keyboard macros in Emacs. More things you can do with Emacs keyboard macros are described in the EmacsWiki entry Keyboard Macros. And if you still miss vi's . command in Emacs, you can use dot-mode, an Emacs mode currently maintained by Robert Wyrick which more or less automatically defines keyboard macros and lets you call them with C-.
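For comparison, the sed route mentioned above could look like this; just a sketch, using the same file glob as above:
# delete every line containing CVE-2011-3192 in place in all affected site files
grep -l CVE-2011-3192 /etc/apache2/sites-available/*[^~] | \
  xargs sed -i '/CVE-2011-3192/d'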

20 September 2011

Axel Beckert: Creative Toilet Paper Usage in Webcomics

Funnily, two of my daily web comics recently featured interesting things you could do with toilet paper: Zits on the 19th of September 2011 involving a fan and Calvin and Hobbes on the 13th of September 2011 involving flushing the toilet. Although both experiments are obviously resource-wasting, they look like quite some fun and I'm tempted to actually try them both at least once. (I don't plan to try this one, though. :-)

31 August 2011

Axel Beckert: Useful but Unknown Unix Tools: How wdiff and colordiff help to choose the right Swiss Army Knife

In light of the fact that it seems possible to fit the plastic caps of a Debian-branded Swiss Army Knife (Last orders today!) onto an existing Swiss Army Knife (German-language howto as PDF), I started to think about which Victorinox Cybertool would fit me best. And because the Victorinox comparison page doesn't really show diffs, just columns of floating text which are not very helpful for generating diffs in your head, I used command line tools for that purpose: wdiff Because the floating texts are not line- but just whitespace-based, the tool of choice is not diff but wdiff, a word-based diff. It encloses additions in {+ +} and removals in [- -] blocks. (No, those aren't Japanese smileys although they look a lot like some. ^^) The easiest and clearest way is to copy and paste the texts from the Victorinox comparison page into some text files and compare them with wdiff:
$ wdiff cybertool34.txt cybertool41.txt
{+Schraubendreher 2.5mm,+} Pinzette, Nähahle mit Nadelöhr, {+Holzsäge,+} Bit-Schlüssel( 5 mm Innensechskant für die D-SUB Steckverbinder, 4 mm Innensechskant für Bits, Bit Phillips 0, Bit Phillips 1, Bit-Schlitzschrauben 4 mm, Bit Phillips 2, Bit Hex 4 mm, Bit Torx 8, Bit Torx 10, Bit Torx 15 ), Kombizange( Hülsenpresser, Drahtschneider ), Stech-Bohrahle, Kugelschreiber( auch zum DIP-Switch verstellen ), Mehrzweckhaken (Paketträger), {+Metallsäge( Metallfeile, Nagelfeile, Nagelreiniger ),+} Dosenöffner( kleiner Schraubendreher ), Kleine Klinge, Grosse Klinge, Ring, inox, Mini-Schraubendreher, Kapselheber( Schraubendreher, Drahtabisolierer ), {+Holzmeissel / Schaber,+} Bit-Halter, Stecknadel, inox, Schere, Korkenzieher, Zahnstocher
So this already extracted the seven tools which are in the Cybertool 41 but not in the Cybertool 34. Nevertheless the diff is still not easily recognizable at first glance. There are several ways to help here. First, wdiff has an option --no-common (the corresponding short option is -3) which just shows added and removed words:
$ wdiff -3 cybertool34.txt cybertool41.txt
======================================================================
 {+Schraubendreher 2.5mm,+}
======================================================================
  {+Holzsäge,+}
======================================================================
  {+Metallsäge( Metallfeile, Nagelfeile, Nagelreiniger ),+}
======================================================================
  {+Holzmeissel / Schaber,+}
======================================================================
This is already way better for quickly recognizing the actual differences. But if you still also want to see the common tools of the two knives, you need some visual help: One option is to use wdiff's --terminal (or short -t) option. Added words are then displayed inverse and removed words are shown underlined (background and foreground colors hardcoded as there is no invert colors style in CSS or HTML):

$ wdiff -t cybertool34.txt cybertool41.txt
Schraubendreher 2.5mm, Pinzette, Nähahle mit Nadelöhr, Holzsäge, Bit-Schlüssel( 5 mm Innensechskant für die D-SUB Steckverbinder, 4 mm Innensechskant für Bits, Bit Phillips 0, Bit Phillips 1, Bit-Schlitzschrauben 4 mm, Bit Phillips 2, Bit Hex 4 mm, Bit Torx 8, Bit Torx 10, Bit Torx 15 ), Kombizange( Hülsenpresser, Drahtschneider ), Stech-Bohrahle, Kugelschreiber( auch zum DIP-Switch verstellen ), Mehrzweckhaken (Paketträger), Metallsäge( Metallfeile, Nagelfeile, Nagelreiniger ), Dosenöffner( kleiner Schraubendreher ), Kleine Klinge, Druckkugelschreiber, Grosse Klinge, Ring, inox, Mini-Schraubendreher, Kapselheber( Schraubendreher, Drahtabisolierer ), Holzmeissel / Schaber, Bit-Halter, Stecknadel, inox, Schere, Korkenzieher, Zahnstocher

But some still like to use color instead of the contrast-rich inverse video and the easily overlooked underlining. This is where colordiff comes into play: colordiff colordiff is like syntax highlighting for diffs on the command line. It works with classic and unified diffs as well as with wdiffs and debdiffs (the debdiff command is part of the devscripts package).
$ wdiff cybertool34.txt cybertool41.txt | colordiff
{+Schraubendreher 2.5mm,+} Pinzette, Nähahle mit Nadelöhr, {+Holzsäge,+} Bit-Schlüssel( 5 mm Innensechskant für die D-SUB Steckverbinder, 4 mm Innensechskant für Bits, Bit Phillips 0, Bit Phillips 1, Bit-Schlitzschrauben 4 mm, Bit Phillips 2, Bit Hex 4 mm, Bit Torx 8, Bit Torx 10, Bit Torx 15 ), Kombizange( Hülsenpresser, Drahtschneider ), Stech-Bohrahle, Kugelschreiber( auch zum DIP-Switch verstellen ), Mehrzweckhaken (Paketträger), {+Metallsäge( Metallfeile, Nagelfeile, Nagelreiniger ),+} Dosenöffner( kleiner Schraubendreher ), Kleine Klinge, Grosse Klinge, Ring, inox, Mini-Schraubendreher, Kapselheber( Schraubendreher, Drahtabisolierer ), {+Holzmeissel / Schaber,+} Bit-Halter, Stecknadel, inox, Schere, Korkenzieher, Zahnstocher
$ wdiff cybertool29.txt cybertool41.txt | colordiff
{+Schraubendreher 2.5mm,+} Pinzette, Nähahle mit Nadelöhr, {+Holzsäge,+} Bit-Schlüssel( 5 mm Innensechskant für die D-SUB Steckverbinder, 4 mm Innensechskant für Bits, Bit Phillips 0, Bit Phillips 1, Bit-Schlitzschrauben 4 mm, Bit Phillips 2, Bit Hex 4 mm, Bit Torx 8, Bit Torx 10, Bit Torx 15 ), {+Kombizange( Hülsenpresser, Drahtschneider ),+} Stech-Bohrahle, {+Kugelschreiber( auch zum DIP-Switch verstellen ), Mehrzweckhaken (Paketträger), Metallsäge( Metallfeile, Nagelfeile, Nagelreiniger ),+} Dosenöffner( kleiner Schraubendreher ), Kleine Klinge, [-Druckkugelschreiber,-] Grosse Klinge, Ring, inox, Mini-Schraubendreher, Kapselheber( Schraubendreher, Drahtabisolierer ), {+Holzmeissel / Schaber,+} Bit-Halter, Stecknadel, inox, {+Schere,+} Korkenzieher, Zahnstocher
(Coloured screenshots done with the ANSI HTML Adapter from the package aha.) Some, especially those who are used to git, are probably confused by the default choice of diff colors. This is easily fixable by writing the following into your ~/.colordiffrc:
newtext=green
oldtext=red
diffstuff=darkblue
cvsstuff=darkyellow
(See also /etc/colordiff for the defaults and hints.) colordiff has, by the way, two operating modes. So now let us compare the Cybertool 29 with the Cybertool 34 in a normal diff (by using the texts from above and replacing all commata with newline characters) with git-like colors:
$ colordiff cybertool29-lines.txt cybertool34-lines.txt
12a13,14
> Kombizange( Hülsenpresser
> Drahtschneider )
13a16,17
> Kugelschreiber( auch zum DIP-Switch verstellen )
> Mehrzweckhaken (Paketträger)
16d19
< Druckkugelschreiber
25a29
> Schere
Or as a unified diff with some context:
$ colordiff -u cybertool29-lines.txt cybertool34-lines.txt
--- cybertool29-lines.txt     2011-08-31 20:55:37.195546238 +0200
+++ cybertool34-lines.txt   2011-08-31 20:55:11.667710504 +0200
@@ -10,10 +10,13 @@
 Bit Torx 8
 Bit Torx 10
 Bit Torx 15 )
+Kombizange( Hülsenpresser
+Drahtschneider )
 Stech-Bohrahle
+Kugelschreiber( auch zum DIP-Switch verstellen )
+Mehrzweckhaken (Paketträger)
 Dosenöffner( kleiner Schraubendreher )
 Kleine Klinge
-Druckkugelschreiber
 Grosse Klinge
 Ring
 inox
@@ -23,5 +26,6 @@
 Bit-Halter
 Stecknadel
 inox
+Schere
 Korkenzieher
 Zahnstocher
So if you want nicely colored diffs with Subversion like you're used to with git, you can use svn diff | colordiff.
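In case you wonder how the *-lines.txt files were produced: splitting the comma-separated tool lists onto one tool per line is a one-liner; a sketch, with the file names as used above:
# turn commata into newlines and strip leading spaces
tr ',' '\n' < cybertool29.txt | sed 's/^ *//' > cybertool29-lines.txt
tr ',' '\n' < cybertool34.txt | sed 's/^ *//' > cybertool34-lines.txt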

29 August 2011

Axel Beckert: SSH Multiplexer: parallel-ssh

There are many SSH multiplexers in Debian, and most of them have one or two features which make them unique and especially useful for one particular use case. I use some of them regularly (I even maintain the Debian package of one of them, namely pconsole :-) and I'll present one of them here now and then. For non-interactive purposes I really like parallel-ssh aka pssh. It takes a file of hostnames and a bunch of common ssh parameters as arguments, executes the given command in parallel in up to 32 threads (by default, adjustable with -p) and waits by default for 60 seconds (adjustable with -t). For example, to restart hobbit-client on all hosts in kiva.txt, the following command is suitable:
$ parallel-ssh -h kiva.txt -l root /etc/init.d/hobbit-client restart
[1] 19:56:03 [FAILURE] kiva6 Exited with error code 127
[2] 19:56:04 [SUCCESS] kiva
[3] 19:56:04 [SUCCESS] kiva4
[4] 19:56:04 [SUCCESS] kiva2
[5] 19:56:04 [SUCCESS] kiva5
[6] 19:56:04 [SUCCESS] kiva3
[7] 19:57:03 [FAILURE] kiva1 Timed out, Killed by signal 9
(Coloured screenshots done with the ANSI HTML Adapter from the package aha.) You can easily see on which hosts the command failed and partially also why: On kiva6 hobbit-client is not installed and therefore the init.d script is not present. kiva1 is currently offline, so the ssh connection timed out. If you want to see the output of the commands, you have two choices. Which one to choose depends on the expected amount of output: If you don't expect a lot of output, the -i (or --inline) option for inline aggregated output is probably the right choice:
$ parallel-ssh -h kiva.txt -l root -t 10 -i uptime
[1] 20:30:20 [SUCCESS] kiva
 20:30:20 up 7 days,  5:51,  0 users,  load average: 0.12, 0.08, 0.06
[2] 20:30:20 [SUCCESS] kiva2
 20:30:20 up 7 days,  5:50,  0 users,  load average: 0.19, 0.08, 0.02
[3] 20:30:20 [SUCCESS] kiva3
 20:30:20 up 7 days,  5:49,  0 users,  load average: 0.10, 0.06, 0.06
[4] 20:30:20 [SUCCESS] kiva4
 20:30:20 up 7 days,  5:49,  0 users,  load average: 0.25, 0.17, 0.14
[5] 20:30:20 [SUCCESS] kiva6
 20:30:20 up 7 days,  5:49, 10 users,  load average: 0.16, 0.08, 0.02
[6] 20:30:21 [SUCCESS] kiva5
 20:30:21 up 7 days,  5:49,  0 users,  load average: 3.11, 3.36, 3.06
[7] 20:30:29 [FAILURE] kiva1 Timed out, Killed by signal 9
If you expect a lot of output you can give directories with the -o (or --outdir) and -e (or --errdir) option:
$ parallel-ssh -h kiva.txt -l root -t 20 -o kiva-output lsb_release -a
[1] 20:36:51 [SUCCESS] kiva
[2] 20:36:51 [SUCCESS] kiva2
[3] 20:36:51 [SUCCESS] kiva3
[4] 20:36:51 [SUCCESS] kiva4
[5] 20:36:53 [SUCCESS] kiva6
[6] 20:36:54 [SUCCESS] kiva5
[7] 20:37:10 [FAILURE] kiva1 Timed out, Killed by signal 9
$ ls -l kiva-output
total 24
-rw-r--r-- 1 abe abe  98 Aug 28 20:36 kiva
-rw-r--r-- 1 abe abe   0 Aug 28 20:36 kiva1
-rw-r--r-- 1 abe abe  98 Aug 28 20:36 kiva2
-rw-r--r-- 1 abe abe  98 Aug 28 20:36 kiva3
-rw-r--r-- 1 abe abe  98 Aug 28 20:36 kiva4
-rw-r--r-- 1 abe abe 102 Aug 28 20:36 kiva5
-rw-r--r-- 1 abe abe 100 Aug 28 20:36 kiva6
$ cat kiva-output/kiva5
Distributor ID:	Debian
Description:	Debian GNU/Linux 6.0.2 (squeeze)
Release:	6.0.2
Codename:	squeeze
The only annoying thing IMHO is that the host list needs to be in a file. With zsh, bash and the original ksh (but neither tcsh, pdksh nor mksh), you can circumvent this restriction with one of the following command lines:
$ parallel-ssh -h <(printf "host1\nhost2\nhost3\n") -l root uptime
[...]
$ parallel-ssh -h <(echo host1 host2 host3 | xargs -n1) -l root uptime
[...]
In addition to parallel-ssh, the pssh package also contains some more ssh-based tools. I think though that parallel-ssh is by far the most useful tool from the pssh package. (Probably no wonder, as it's the most generic one. :-)

28 August 2011

Axel Beckert: Useful but Unknown Unix Tools: watch

Yet another useful tool of which at least I heard quite late in my Unix career is watch. For a long time I wrote one-liners like this to monitor the output of a command:
while :; do echo -n "`date` "; host bla nameserver; sleep 2; done
But it's way shorter and less error-prone to use watch from Debian's procps package and just write
watch host bla nameserver
The only relevant difference is that I don't get any kind of history of when the output of the command changed, e.g. to calculate the rate at which a file grows. You can even track the output of more than one command:
watch 'ps aux | grep resize2fs; df -hl'
Also a nice way to use watch is to run it inside GNU Screen (or tmux or splitvt) and split up the terminal horizontally, i.e. show the output of watch in one window and the process you're tracking with the commands run by watch in the other window, and see both running at the same time. Update, Sunday, 28th of August 2011, 17:13h: I never found a useful case for watch's -d option, which highlights changes compared to the previous run (by inverting the changed bytes), but by now three people have pointed out the -d option in response to this blog posting and weasel also had some nice examples, so here they are: Keep an eye on the current network routes (once per second) of a host and quickly notice when they change:
watch -n1 -d ip r
Watch the current directory for size or time stamp changes of its files:
watch -d ls -l
The option -d only highlights changes from the previous run to the next run. If you want to see all bytes which ever changed since the first run, use --differences=cumulative. Thanks to Klaus "Mowgli" Ethgen, Ulrich "mru" Dangel, Uli "youam" Martens and Peter "weasel" Palfrader for comments and suggestions.
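And if you do need the history that watch lacks, e.g. to calculate the rate at which a file grows, a plain loop that logs a timestamp along with the file size still does the job; a quick sketch, with a made-up file name:
# log "epoch-seconds size-in-bytes" every two seconds, both to screen and to a file
while :; do echo "$(date +%s) $(stat -c %s /var/log/bigfile.log)"; sleep 2; done | tee growth.log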

27 August 2011

Axel Beckert: Useful but Unknown Unix Tools: Calculating with IPs, The Sequel

This is a direct followup to my previous blog posting about calculating IPs and netmasks with the tools netmask and prips. Kurt Roeckx (via e-mail) and Niall Donegan (via a comment on that blog posting) both told me about the package sipcalc, and Kurt also mentioned the package ipcalc. Thanks for that! And since I found both useful, too, let's put them in their own blog posting: Both tools, ipcalc and sipcalc, offer a "get all information at once" mode which is not present in the previously presented tool netmask. ipcalc ipcalc by default outputs all information and even in ANSI colors:
$ ipcalc 192.168.96.0/21
Address:   192.168.96.0         11000000.10101000.01100 000.00000000
Netmask:   255.255.248.0 = 21   11111111.11111111.11111 000.00000000
Wildcard:  0.0.7.255            00000000.00000000.00000 111.11111111
=>
Network:   192.168.96.0/21      11000000.10101000.01100 000.00000000
HostMin:   192.168.96.1         11000000.10101000.01100 000.00000001
HostMax:   192.168.103.254      11000000.10101000.01100 111.11111110
Broadcast: 192.168.103.255      11000000.10101000.01100 111.11111111
Hosts/Net: 2046                  Class C, Private Internet
(Coloured screenshots done with the ANSI HTML Adapter from the package aha.) You can suppress the bitwise output or directly output HTML via commandline options. For example ipcalc -b -h 192.168.96.0/21 outputs the following content:
Address: 192.168.96.0
Netmask: 255.255.248.0 = 21
Wildcard: 0.0.7.255
=>
Network: 192.168.96.0/21
HostMin: 192.168.96.1
HostMax: 192.168.103.254
Broadcast: 192.168.103.255
Hosts/Net: 2046 Class C, Private Internet
Yes, that's an HTML table and not preformatted text, just with a monospaced font. (I just removed the hardcoded text color from it, otherwise it would not look nice on dark backgrounds like in Planet Commandline's default color scheme.) Like netmask, ipcalc can also deaggregate IP ranges into the largest possible networks:
$ ipcalc 192.168.87.0 - 192.168.110.255
deaggregate 192.168.87.0 - 192.168.110.255
192.168.87.0/24
192.168.88.0/21
192.168.96.0/21
192.168.104.0/22
192.168.108.0/23
192.168.110.0/24
(ipcalc -r 192.168.87.0 192.168.110.255 is just another way to write this, and it results in the same output.) To find networks for at least 20, 63 and 30 hosts within a /24 network, use ipcalc's split mode, e.g. ipcalc 192.0.2.0/24 -s 20 63 30:
Address:   192.0.2.0            
Netmask:   255.255.255.0 = 24   
Wildcard:  0.0.0.255            
=>
Network:   192.0.2.0/24         
HostMin:   192.0.2.1            
HostMax:   192.0.2.254          
Broadcast: 192.0.2.255          
Hosts/Net: 254                   Class C
1. Requested size: 20 hosts
Netmask:   255.255.255.224 = 27 
Network:   192.0.2.128/27       
HostMin:   192.0.2.129          
HostMax:   192.0.2.158          
Broadcast: 192.0.2.159          
Hosts/Net: 30                    Class C
2. Requested size: 63 hosts
Netmask:   255.255.255.128 = 25 
Network:   192.0.2.0/25         
HostMin:   192.0.2.1            
HostMax:   192.0.2.126          
Broadcast: 192.0.2.127          
Hosts/Net: 126                   Class C
3. Requested size: 30 hosts
Netmask:   255.255.255.224 = 27 
Network:   192.0.2.160/27       
HostMin:   192.0.2.161          
HostMax:   192.0.2.190          
Broadcast: 192.0.2.191          
Hosts/Net: 30                    Class C
Needed size:  192 addresses.
Used network: 192.0.2.0/24
Unused:
192.0.2.192/26
sipcalc sipcalc is similar to ipcalc. One big difference seems to be the IPv6 support:
$ sipcalc 2001:DB8::/32
-[ipv6 : 2001:DB8::/32] - 0
[IPV6 INFO]
Expanded Address        - 2001:0db8:0000:0000:0000:0000:0000:0000
Compressed address      - 2001:db8::
Subnet prefix (masked)  - 2001:db8:0:0:0:0:0:0/32
Address ID (masked)     - 0:0:0:0:0:0:0:0/32
Prefix address          - ffff:ffff:0:0:0:0:0:0
Prefix length           - 32
Address type            - Aggregatable Global Unicast Addresses
Network range           - 2001:0db8:0000:0000:0000:0000:0000:0000 -
                          2001:0db8:ffff:ffff:ffff:ffff:ffff:ffff
(Thanks to Niall for the pointer to RFC 3849. :-) It can also split up networks into smaller chunks, but only same-size chunks, e.g. splitting a /32 IPv6 network into /34 networks:
sipcalc -S34 2001:DB8::/32
-[ipv6 : 2001:DB8::/32] - 0
[Split network]
Network                 - 2001:0db8:0000:0000:0000:0000:0000:0000 -
                          2001:0db8:3fff:ffff:ffff:ffff:ffff:ffff
Network                 - 2001:0db8:4000:0000:0000:0000:0000:0000 -
                          2001:0db8:7fff:ffff:ffff:ffff:ffff:ffff
Network                 - 2001:0db8:8000:0000:0000:0000:0000:0000 -
                          2001:0db8:bfff:ffff:ffff:ffff:ffff:ffff
Network                 - 2001:0db8:c000:0000:0000:0000:0000:0000 -
                          2001:0db8:ffff:ffff:ffff:ffff:ffff:ffff
-
Similar thing with IPv4:
sipcalc -s27 192.0.2.0/24
-[ipv4 : 192.0.2.0/24] - 0
[Split network]
Network                 - 192.0.2.0       - 192.0.2.31
Network                 - 192.0.2.32      - 192.0.2.63
Network                 - 192.0.2.64      - 192.0.2.95
Network                 - 192.0.2.96      - 192.0.2.127
Network                 - 192.0.2.128     - 192.0.2.159
Network                 - 192.0.2.160     - 192.0.2.191
Network                 - 192.0.2.192     - 192.0.2.223
Network                 - 192.0.2.224     - 192.0.2.255
sipcalc also has a "show me all information" mode with the -a option:
$ sipcalc -a 192.168.96.0/21
-[ipv4 : 192.168.96.0/21] - 0
[Classfull]
Host address            - 192.168.96.0
Host address (decimal)  - 3232260096
Host address (hex)      - C0A86000
Network address         - 192.168.96.0
Network class           - C
Network mask            - 255.255.255.0
Network mask (hex)      - FFFFFF00
Broadcast address       - 192.168.96.255
[CIDR]
Host address            - 192.168.96.0
Host address (decimal)  - 3232260096
Host address (hex)      - C0A86000
Network address         - 192.168.96.0
Network mask            - 255.255.248.0
Network mask (bits)     - 21
Network mask (hex)      - FFFFF800
Broadcast address       - 192.168.103.255
Cisco wildcard          - 0.0.7.255
Addresses in network    - 2048
Network range           - 192.168.96.0 - 192.168.103.255
Usable range            - 192.168.96.1 - 192.168.103.254
[Classfull bitmaps]
Network address         - 11000000.10101000.01100000.00000000
Network mask            - 11111111.11111111.11111111.00000000
[CIDR bitmaps]
Host address            - 11000000.10101000.01100000.00000000
Network address         - 11000000.10101000.01100000.00000000
Network mask            - 11111111.11111111.11111000.00000000
Broadcast address       - 11000000.10101000.01100111.11111111
Cisco wildcard          - 00000000.00000000.00000111.11111111
Network range           - 11000000.10101000.01100000.00000000 -
                          11000000.10101000.01100111.11111111
Usable range            - 11000000.10101000.01100000.00000001 -
                          11000000.10101000.01100111.11111110
[Networks]
Network                 - 192.168.96.0    - 192.168.103.255 (current)
Thanks again to Kurt and Niall for their contributions!

Now listening to the schreimaschine and fausttanz submissions for the interactive competition at the Bünzli/DemoDays in Olten (Switzerland)

Axel Beckert: Useful but Unknown Unix Tools: Calculating with IPs

There are two small CLI tools I often need when I'm handling larger networks or more than a few IP addresses at once: netmask netmask is very handy for calculating with netmasks (anyone expected something else? ;-) in all variants:
$ netmask 192.168.96.0/255.255.248.0
    192.168.96.0/21
$ netmask -s 192.168.96.0/21
    192.168.96.0/255.255.248.0  
$ netmask --range 192.168.96.0/21
    192.168.96.0-192.168.103.255  (2048)
$ netmask 192.168.96.0:192.168.103.255
    192.168.96.0/21
$ netmask 192.168.87.0:192.168.110.255
    192.168.87.0/24
    192.168.88.0/21
    192.168.96.0/21
   192.168.104.0/22
   192.168.108.0/23
   192.168.110.0/24
$ netmask --cisco 192.168.96.0/21
    192.168.96.0 0.0.7.255
(The IP ranges in RFC 5737 were too small for the examples I had in mind. :-) There's though one thing netmask can't do out of the box, and that's where the second tool comes into play: prips When I read the package name prips, I always think of something like "print postscript" or so, but it's actually an abbreviation for "print IPs". And that's all it does:
$ prips 192.0.2.0/29
192.0.2.0
192.0.2.1
192.0.2.2
192.0.2.3
192.0.2.4
192.0.2.5
192.0.2.6
192.0.2.7
$ prips 198.51.100.1 198.51.100.6
198.51.100.1
198.51.100.2
198.51.100.3
198.51.100.4
198.51.100.5
198.51.100.6
$ prips -i 2 203.0.113.0/28
203.0.113.0
203.0.113.2
203.0.113.4
203.0.113.6
203.0.113.8
203.0.113.10
203.0.113.12
203.0.113.14
$ prips -f hex 192.0.2.8/29
c0000208
c0000209
c000020a
c000020b
c000020c
c000020d
c000020e
c000020f
prips has proven to be very useful in combination with shell loops like these:
$ prips 192.0.2.0/29 | xargs -n 1 host
[...]
$ for ip in `prips 198.51.100.1 198.51.100.6`; do host $ip; done
[...]
And since prips doesn't support the 192.0.2.0/255.255.255.248 netmask syntax, you can even easily combine those two tools:
$ prips `netmask 192.0.2.0/255.255.255.248`
[...]
(Hah! Now I was able to use RFC5737 IP ranges! ;-)

26 August 2011

Axel Beckert: Useful but Unknown Unix Tools: Kill all processes of a user

I already got mails like "What a pity that your nice blog posting series ended". No, it didn't end. As announced, I knew that I wouldn't be able to keep up a daily schedule. It worked as long as I had already written the postings in advance. But in the end the last postings were written just in time and then I ran out of leisure and muse for a while. But as I said: It didn't end, it will be continued. And this is the next such posting. Oh, and for those who tell me about further tools I should blog about: I appreciate that, especially because that way I also hear about tools I didn't know about. But why just tell me and not blog about it yourself? :-) At least those whose blog is part of Planet Debian or Planet Symlink anyway really should do this themselves. I'd really like to see others writing about cool tools, too. I have a right neither to the idea nor to the name of this series (call it a meme if you want :-), so please go on and publish your favourite tools in a blog posting, too. :-) And for all those who want to join me and Myon in blogging about cool Unix tools, independent of whether listed on Planet Debian or Planet Symlink, I encourage you to offer a separate feed for this kind of postings and join us on Planet Commandline. Anyway, here's the next such posting: As system administrator you often have the case that you have to kill all processes of one user, e.g. if a daemon didn't properly shut itself down or amok-running leftovers of a GUI session linger. Many use pkill -SIGNAL -u user from the procps package or killall -SIGNAL -u user from the psmisc package for it. But that's a) quite cumbersome to type and b) there is a chance to forget the -u, and then bad things may happen, especially with pkill's default substring match, so I prefer another tool with a more explicit name:

slay slay has an easy-to-remember name (at least for BOFHs ;-) which is even quicker to type (alternating one character with the left and the right hand, at least on US-layout keyboards) than pkill (all characters typed with the right hand), and it has the same easy-to-remember commandline syntax as kill itself:
slay -SIGNAL user [user ...]
But beware, slay is

not only for BOFHs, but also from a BOFH: It has a "mean mode" which is activated by default. With mean mode on, it won't kill the given user but the user who called the program, if it is invoked as an ordinary user without root rights. *g* Interestingly I never ran into this issue even though I have used this program often and for many years now. But some Ubuntu users did, probably because adding a sudo in front of some command is easier to forget than doing an ssh root@localhost or su - beforehand. They even seem to be so desperate about it that they forwarded the issue from Launchpad to the Debian Bug Tracking System. ;-) But to be honest, even if I was very amused about those bug reports, isn't this issue "grave", as it very likely causes (unexpected) data loss?

Now playing: Monzy - kill dash nine ("and your process is mine" ;-)

10 August 2011

Axel Beckert: git $something -p

git add -p is one of my favourite git features. It lets you selectively add your local changes hunk by hunk to the staging area. This is especially nice if you want to commit one change in a file, but not a second one you have also already made. Recently I noticed that you can also selectively revert changes already in the staging area using git reset -p HEAD. The user interface is exactly the same as for git add -p. Today I discovered another selective undo in git by trying out of curiosity whether it works, too: undoing local changes selectively using git checkout -p. Maybe less useful than those mentioned above, but nevertheless most times quicker than firing up your favourite editor and undoing the changes manually. Another nice git feature which I discovered by accidentally using it (this time even unwittingly) is git checkout -, which behaves like cd -, just for branches instead of directories, i.e. it switches back to the previously checked out branch. Very useful for quickly changing between two branches again and again.
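Put together, a typical session using these selective commands might look like this; just a sketch, file and branch names are made up:
$ git add -p main.c          # stage only the hunks you actually want to commit
$ git reset -p HEAD main.c   # unstage hunks you staged by mistake
$ git checkout -p main.c     # throw away unwanted local hunks entirely
$ git commit -m "fix the one thing I actually meant to change"
$ git checkout other-branch
$ git checkout -             # and jump straight back to the previous branch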

8 August 2011

Axel Beckert: Finding libraries not marked as automatically installed with aptitude

This is a direct followup to my blog posting Finding packages for deinstallation on the commandline with aptitude. In the meanwhile one more alias for finding obsolete packages made it into my zsh configuration. It's an alias to find installed libraries, -data, -common and other usually only automatically installed packages which are nevertheless not marked as having been installed automatically:
alias aptitude-review-unmarkauto-libraries='aptitude -o "Aptitude::Pkg-Display-Limit=( ^lib !-dev$ !-dbg$ !-utils$ !-tools$ !-bin$ !-doc$ !^libreoffice | -data$ | -common$ | -base$ !^r-base ) !~M"'
And yes, this pattern is slightly larger than those from the previous posting, so here's the used filter in a somewhat more readable form:
(
  ^lib
    !-dev$
    !-dbg$
    !-utils$
    !-tools$
    !-bin$
    !-doc$
    !^libreoffice |
  -data$ |
  -common$ |
  -base$
    !^r-base
)
!~M
It matches all non-automatically installed packages whose name starts with "lib" but is neither a debug symbols package, a development header package, a documentation package, a package containing supplied commands, nor a libreoffice package. Additionally it matches all non-automatically installed packages ending in -data, -common, or -base, but excludes r-base packages. Of course you can then mark any erroneously unmarked library by pressing M (Shift-m). If you press g for "Go" afterwards and wonder why nothing to remove shows up, be reminded that the filter limit is active in this view, too. So press l for "Limit", then Ctrl-u to erase the current filter limit of this view, and press Enter to set the new (now empty) filter, et voilà. Hope this is of help for some others, too.
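If you prefer a non-interactive review, roughly the same idea should also work with aptitude's search and markauto commands from the shell; a sketch, simplified to the lib* part only and with a made-up package name:
# list installed lib* packages that are not marked as automatically installed
aptitude search '~i~n^lib!~M'
# then mark individual findings as automatically installed, e.g.:
sudo aptitude markauto libfoo1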
